Adverse weather conditions
RadarOcc: Robust 3D Occupancy Prediction with 4D Imaging Radar
Current methods predominantly rely on LiDAR or camera inputs for 3D occupancy prediction. These methods are susceptible to adverse weather conditions, limiting the all-weather deployment of self-driving cars. To improve perception robustness, we leverage the recent advances in automotive radars and introduce a novel approach that utilizes 4D imaging radar sensors for 3D occupancy prediction. Our method, RadarOcc, circumvents the limitations of sparse radar point clouds by directly processing the 4D radar tensor, thus preserving essential scene details. RadarOcc innovatively addresses the challenges associated with the voluminous and noisy 4D radar data by employing Doppler bins descriptors, sidelobe-aware spatial sparsification, and range-wise self-attention mechanisms. To minimize the interpolation errors associated with direct coordinate transformations, we also devise a spherical-based feature encoding followed by spherical-to-Cartesian feature aggregation. We benchmark various baseline methods based on distinct modalities on the public K-Radar dataset. The results demonstrate RadarOcc's state-of-the-art performance in radar-based 3D occupancy prediction and promising results even when compared with LiDAR- or camera-based methods. Additionally, we present qualitative evidence of the superior performance of 4D radar in adverse weather conditions and explore the impact of key pipeline components through ablation studies.
- Transportation > Ground > Road (0.39)
- Information Technology > Robotics & Automation (0.39)
- Automobiles & Trucks (0.39)
- Europe > Netherlands > South Holland > Delft (0.04)
- Asia > South Korea > Daejeon > Daejeon (0.04)
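The spherical-to-Cartesian feature aggregation described in the abstract can be illustrated with a minimal sketch: features live on a spherical (range, azimuth, elevation) grid, and each Cartesian voxel center is converted to spherical coordinates to look up a feature. The function name, bin-edge layout, and nearest-bin gathering are illustrative assumptions, not the paper's actual interpolation scheme.

```python
import numpy as np

def spherical_to_cartesian_gather(feat_sph, r_edges, az_edges, el_edges, xyz):
    """Gather spherical-grid features at Cartesian query points.

    feat_sph: (R, A, E) feature grid indexed by (range, azimuth, elevation) bin.
    *_edges:  monotonically increasing bin edges for each spherical axis.
    xyz:      (N, 3) Cartesian query points.
    Returns an (N,) array; queries outside the grid get 0.
    """
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    az = np.arctan2(y, x)
    el = np.arcsin(np.divide(z, r, out=np.zeros_like(z), where=r > 0))

    # Map each spherical coordinate to its bin index.
    ri = np.searchsorted(r_edges, r) - 1
    ai = np.searchsorted(az_edges, az) - 1
    ei = np.searchsorted(el_edges, el) - 1
    valid = ((0 <= ri) & (ri < feat_sph.shape[0]) &
             (0 <= ai) & (ai < feat_sph.shape[1]) &
             (0 <= ei) & (ei < feat_sph.shape[2]))
    out = np.zeros(len(xyz))
    out[valid] = feat_sph[ri[valid], ai[valid], ei[valid]]
    return out
```

Aggregating on the Cartesian side like this, rather than resampling the whole tensor into Cartesian space first, is what the abstract credits with reducing interpolation error.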
RoSe: Robust Self-supervised Stereo Matching under Adverse Weather Conditions
Wang, Yun, Hu, Junjie, Hou, Junhui, Zhang, Chenghao, Yang, Renwei, Wu, Dapeng Oliver
Abstract--Recent self-supervised stereo matching methods have made significant progress, but their performance significantly degrades under adverse weather conditions such as night, rain, and fog. We identify two primary weaknesses contributing to this performance degradation. First, adverse weather introduces noise and reduces visibility, making CNN-based feature extractors struggle with degraded regions like reflective and textureless areas. Second, these degraded regions can disrupt accurate pixel correspondences, leading to ineffective supervision based on the photometric consistency assumption. To address these challenges, we propose injecting robust priors derived from the visual foundation model into the CNN-based feature extractor to improve feature representation under adverse weather conditions. We then introduce scene correspondence priors to construct robust supervisory signals rather than relying solely on the photometric consistency assumption. Specifically, we create synthetic stereo datasets with realistic weather degradations. These datasets feature clear and adverse image pairs that maintain the same semantic context and disparity, preserving the scene correspondence property. With this knowledge, we propose a robust self-supervised training paradigm, consisting of two key steps: robust self-supervised scene correspondence learning and adverse weather distillation. Both steps aim to align underlying scene results from clean and adverse image pairs, thus improving model disparity estimation under adverse weather effects. Extensive experiments demonstrate the effectiveness and versatility of our proposed solution, which outperforms existing state-of-the-art self-supervised methods. Disparity estimation from stereo images is a critical task in autonomous driving and scene reconstruction.
This work was supported by the InnoHK Initiative of the Government of the Hong Kong SAR and the Laboratory for Artificial Intelligence (AI)-Powered Financial Technologies, with additional support from the Hong Kong Research Grants Council (RGC) grant C1042-23GF and the Hong Kong Innovation and Technology Fund (ITF) grant MHP/061/23. Junjie Hu is with the Chinese University of Hong Kong, Shenzhen, China. Chenghao Zhang is with the Institute of Automation, Chinese Academy of Sciences (CASIA).
- Asia > China > Guangdong Province > Shenzhen (0.24)
- North America > United States > Florida > Alachua County > Gainesville (0.14)
- Asia > China > Hong Kong > Kowloon (0.04)
- Government (0.68)
- Education > Educational Setting (0.46)
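The photometric consistency assumption that the abstract identifies as unreliable under adverse weather can be sketched as follows: warp the right image into the left view using the predicted disparity and penalize the per-pixel difference. The integer-shift warp and L1 penalty are a simplified stand-in for the differentiable warping and SSIM+L1 losses typical of self-supervised stereo, not this paper's exact loss.

```python
import numpy as np

def photometric_l1(left, right, disparity):
    """Mean L1 photometric error after warping `right` into the left view.

    left, right: (H, W) grayscale images; disparity: (H, W) non-negative
    horizontal shifts. Pixels whose source column falls outside the image
    are ignored, since their correspondence is unknown.
    """
    H, W = left.shape
    # Left pixel (row, u) corresponds to right pixel (row, u - d).
    cols = np.arange(W)[None, :] - np.rint(disparity).astype(int)
    valid = (cols >= 0) & (cols < W)
    rows = np.repeat(np.arange(H)[:, None], W, axis=1)
    warped = np.zeros_like(left)
    warped[valid] = right[rows[valid], cols[valid]]
    return np.abs(left - warped)[valid].mean()
```

Under rain or fog the brightness-constancy premise behind this loss breaks down, which is why the paper supplements it with scene correspondence priors from clear/adverse image pairs.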
Adverse Weather-Independent Framework Towards Autonomous Driving Perception through Temporal Correlation and Unfolded Regularization
Kou, Wei-Bin, Zhu, Guangxu, Ye, Rongguang, Lei, Jingreng, Wang, Shuai, Lin, Qingfeng, Tang, Ming, Wu, Yik-Chung
Various adverse weather conditions such as fog and rain pose a significant challenge to autonomous driving (AD) perception tasks like semantic segmentation, object detection, etc. The common domain adaptation strategy is to minimize the disparity between images captured in clear and adverse weather conditions. However, domain adaptation faces two challenges: (I) it typically relies on utilizing a clear image as a reference, which is challenging to obtain in practice; (II) it generally targets a single adverse weather condition and performs poorly when confronting a mixture of multiple adverse weather conditions. To address these issues, we introduce a reference-free and Adverse weather condition-independent (Advent) framework (rather than a specific model architecture) that can be implemented by various backbones and heads. This is achieved by leveraging the homogeneity over short durations, getting rid of the clear reference and generalizing to arbitrary weather conditions. Specifically, Advent includes three integral components: (I) the Locally Sequential Mechanism (LSM) leverages temporal correlations between adjacent frames to achieve a weather-condition-agnostic effect thanks to the homogeneity behind arbitrary weather conditions; (II) the Globally Shuffled Mechanism (GSM) is proposed to shuffle segments processed by LSM from different positions of the input sequence to prevent overfitting to LSM-induced temporal patterns; (III) Unfolded Regularizers (URs) are the deep unfolding implementation of two proposed regularizers that penalize model complexity to enhance across-weather generalization. We take the semantic segmentation task as an example to assess the proposed Advent framework. Extensive experiments demonstrate that the proposed Advent outperforms existing state-of-the-art baselines by large margins.
- Transportation > Ground > Road (0.72)
- Information Technology > Robotics & Automation (0.62)
- Automobiles & Trucks (0.62)
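The LSM/GSM data flow described in the abstract can be sketched in a few lines: keep temporal order within short segments so short-term weather homogeneity is preserved, then shuffle the segment order so the model cannot overfit to long-range temporal patterns. The function name, segment length, and seeding are illustrative assumptions.

```python
import random

def lsm_gsm_batches(frames, segment_len, seed=0):
    """Split a frame sequence into locally ordered segments (LSM-style),
    then shuffle the segment order globally (GSM-style).

    Order *within* each segment is preserved; order *between* segments
    is randomized.
    """
    segments = [frames[i:i + segment_len]
                for i in range(0, len(frames), segment_len)]
    rng = random.Random(seed)
    rng.shuffle(segments)
    return segments
```

This is only the sequencing of inputs; the regularization (URs) and the backbone are orthogonal to it, which is why the abstract calls Advent a framework rather than a model architecture.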
Robust Adverse Weather Removal via Spectral-based Spatial Grouping
Jeong, Yuhwan, Yang, Yunseo, Yoon, Youngho, Yoon, Kuk-Jin
Adverse weather conditions cause diverse and complex degradation patterns, driving the development of All-in-One (AiO) models. However, recent AiO solutions still struggle to capture diverse degradations, since global filtering methods like direct operations on the frequency domain fail to handle highly variable and localized distortions. To address these issues, we propose the Spectral-based Spatial Grouping Transformer (SSGformer), a novel approach that leverages spectral decomposition and group-wise attention for multi-weather image restoration. SSGformer decomposes images into high-frequency edge features using conventional edge detection and low-frequency information via Singular Value Decomposition. We utilize multi-head linear attention to effectively model the relationship between these features. The fused features are integrated with the input to generate a grouping-mask that clusters regions based on spatial similarity and image texture. To fully leverage this mask, we introduce a group-wise attention mechanism, enabling robust adverse weather removal and ensuring consistent performance across diverse weather conditions. We also propose a Spatial Grouping Transformer Block that uses both channel attention and spatial attention, effectively balancing feature-wise relationships and spatial dependencies. Extensive experiments show the superiority of our approach, validating its effectiveness in handling varied and intricate adverse weather degradations.
- Asia > Middle East > Israel (0.04)
- North America > United States > Oklahoma > Beaver County (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
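The SVD-based spectral decomposition the abstract describes can be sketched with a truncated SVD: the low-rank reconstruction captures the smooth, low-frequency content, and the residual serves here as a simple stand-in for the high-frequency branch (the paper extracts that branch with conventional edge detection instead).

```python
import numpy as np

def spectral_decompose(img, rank):
    """Split a 2D image into a low-rank (low-frequency) component via
    truncated SVD and a residual high-frequency component.

    The two parts sum back to the original image exactly.
    """
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return low, img - low
```

Because weather artifacts like localized streaks tend to live in the high-frequency residual while scene structure dominates the leading singular components, treating the two parts separately gives the restoration network a cleaner split of the degradation.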
Object detection in adverse weather conditions for autonomous vehicles using Instruct Pix2Pix
Gurbindo, Unai, Brando, Axel, Abella, Jaume, König, Caroline
Enhancing the robustness of object detection systems under adverse weather conditions is crucial for the advancement of autonomous driving technology. This study presents a novel approach leveraging the diffusion model Instruct Pix2Pix to develop prompting methodologies that generate realistic datasets with weather-based augmentations, aiming to mitigate the impact of adverse weather on the perception capabilities of state-of-the-art object detection models, including Faster R-CNN and YOLOv10. Experiments were conducted in two environments: first in the CARLA simulator, where an initial evaluation of the proposed data augmentation was performed, and then on the real-world image datasets BDD100K and ACDC, demonstrating the effectiveness of the approach in real environments. The key contributions of this work are twofold: (1) identifying and quantifying the performance gap in object detection models under challenging weather conditions, and (2) demonstrating how tailored data augmentation strategies can significantly enhance the robustness of these models. This research establishes a solid foundation for improving the reliability of perception systems in demanding environmental scenarios, and provides a pathway for future advancements in autonomous driving. Autonomous driving is one of the most significant technological advancements of the last decade, with the potential to radically transform transportation and urban mobility. This progress has been driven by rapid developments in artificial intelligence, machine learning, and computer vision, aiming to reduce accidents, enhance traffic efficiency, and provide mobility access to individuals with disabilities or those without access to traditional vehicles. Despite these promising advancements, autonomous driving systems face several critical challenges that hinder their implementation and widespread adoption [1].
- Europe > Spain > Catalonia > Barcelona Province > Barcelona (0.04)
- Europe > Belgium > Flanders > Flemish Brabant > Leuven (0.04)
- Research Report (1.00)
- Overview (0.88)
- Transportation > Ground > Road (1.00)
- Automobiles & Trucks (1.00)
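Instruction-driven editors like Instruct Pix2Pix take a source image plus a text edit instruction, so a weather-augmentation pipeline reduces to constructing edit prompts per target condition. The helper below is a hypothetical sketch of that prompting step; the template wording and condition list are illustrative choices, not the paper's exact prompting methodology.

```python
def weather_edit_prompts(conditions, intensity="heavy"):
    """Build edit prompts for an instruction-driven image editor
    (e.g. Instruct Pix2Pix) to turn a clear-weather driving image
    into an adverse-weather variant.

    One prompt per requested weather condition; the constraint on the
    road layout aims to preserve object-detection ground truth.
    """
    template = "make it a {intensity} {condition} scene, keep the road layout unchanged"
    return [template.format(intensity=intensity, condition=c) for c in conditions]
```

Keeping the scene geometry fixed in the instruction matters because the original bounding-box annotations are reused for the augmented images.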
An Iterative Task-Driven Framework for Resilient LiDAR Place Recognition in Adverse Weather
Zhao, Xiongwei, Chen, Xieyuanli, Zhu, Xu, Xie, Xingxiang, Bai, Haojie, Wen, Congcong, Zhou, Rundong, Sun, Qihao
LiDAR place recognition (LPR) plays a vital role in autonomous navigation. However, existing LPR methods struggle to maintain robustness under adverse weather conditions such as rain, snow, and fog, where weather-induced noise and point cloud degradation impair LiDAR reliability and perception accuracy. To tackle these challenges, we propose an Iterative Task-Driven Framework (ITDNet), which integrates a LiDAR Data Restoration (LDR) module and a LiDAR Place Recognition (LPR) module through an iterative learning strategy. These modules are jointly trained end-to-end, with alternating optimization to enhance performance. The core rationale of ITDNet is to leverage the LDR module to recover the corrupted point clouds while preserving structural consistency with clean data, thereby improving LPR accuracy in adverse weather. Simultaneously, the LPR task provides feature pseudo-labels to guide the LDR module's training, aligning it more effectively with the LPR task. To achieve this, we first design a task-driven LPR loss and a reconstruction loss to jointly supervise the optimization of the LDR module. Furthermore, for the LDR module, we propose a Dual-Domain Mixer (DDM) block for frequency-spatial feature fusion and a Semantic-Aware Generator (SAG) block for semantic-guided restoration. In addition, for the LPR module, we introduce a Multi-Frequency Transformer (MFT) block and a Wavelet Pyramid NetVLAD (WPN) block to aggregate multi-scale, robust global descriptors. Finally, extensive experiments on Weather-KITTI, Boreas, and our proposed Weather-Apollo datasets demonstrate that ITDNet outperforms existing LPR methods, achieving state-of-the-art performance in adverse weather. Xiongwei Zhao, Xu Zhu and Haojie Bai are with the School of Electronic and Information Engineering, Harbin Institute of Technology (Shenzhen), Shenzhen 518071, China (e-mail: xwzhao@stu.hit.edu.cn,
- Asia > China > Guangdong Province > Shenzhen (0.45)
- Asia > China > Heilongjiang Province > Harbin (0.24)
- North America > United States (0.04)
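The alternating optimization the abstract describes, where the restoration module is updated against a reconstruction loss plus a task-driven term, then the recognition module is updated against the restored output, can be illustrated with a toy scalar version. The quadratic losses and parameters below are illustrative only, not the paper's objectives.

```python
def alternating_training(steps=100, lr=0.1):
    """Toy alternating optimization in the spirit of an iterative
    task-driven framework: a restoration parameter `theta_r` and a
    recognition parameter `theta_p` are updated in turn, each against
    a loss that depends on the other's current value.
    """
    theta_r, theta_p = 5.0, -5.0
    for _ in range(steps):
        # Restoration step: reconstruction loss (theta_r - 1)^2 plus a
        # task-driven term pulling theta_r toward the recognition side.
        grad_r = 2 * (theta_r - 1.0) + 2 * (theta_r - theta_p)
        theta_r -= lr * grad_r
        # Recognition step: trained against the freshly restored output.
        grad_p = 2 * (theta_p - theta_r)
        theta_p -= lr * grad_p
    return theta_r, theta_p
```

The coupling term is what distinguishes this from training the two modules independently: the restoration parameter settles where both its own loss and the downstream task agree, mirroring how the LDR module is supervised by the task-driven LPR loss as well as reconstruction.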